An introduction to symbolic simulation

Three methods for testing functional equivalence are currently available to designers — conventional simulation, cone-based equivalence checking, and symbolic simulation. Most designers are familiar with the first two, while symbolic simulation is newer and has only been commercially available for a few years. Each method has its strengths, and the most effective approach depends on the specific application.

Symbolic simulation has been quietly gaining adoption among the world’s leading full-custom memory design teams over the last few years. In symbolic simulation, symbols such as data1, addr7, or w_enable, rather than binary 1s and 0s, are used as input vectors to simulate an RTL or Spice-level circuit. The simulator propagates symbols, rather than binary values, from inputs to outputs. The resulting output equations for the two designs are then compared and verified to be equivalent for all possible input combinations.

The use of symbolic simulation removes the RTL restrictions and circuit limitations inherent in today’s cone-based equivalence checking tools. This production-proven technology offers circuit designers the ability to directly verify the functionality of large complex memories and macro cells without having to re-code their RTL or modify their circuits as other approaches require.

Another key advantage is that designers can verify their RTL vs. Spice models much sooner in the design flow. Today’s symbolic simulation tools automatically create test benches, so designers can begin design verification earlier without waiting until RTL test benches and Spice test vectors are developed.

These new techniques can handle the complex netlists typically found in today’s gigabit memories because one symbolic vector can replace 2^n binary vectors, where n is the number of inputs. Huge capacity, coupled with the ability to directly read Verilog RTL and Spice netlists, makes this approach ideal for memory verification applications.

This article outlines the key challenges of memory verification, and goes on to describe how symbolic simulation and its underlying technologies offer advantages for verifying full-custom circuit designs, such as memories and macro-cells. Application examples are provided along with a few examples of the types of problems that are often uncovered by using symbolic simulation early in the design flow.

Conventional simulation handles moderate-sized designs

Conventional simulation is perhaps the most versatile verification method available. To demonstrate that two designs are equivalent, the verification team merely simulates large sets of random or manually generated input vectors on both the reference and implementation designs and then checks that corresponding outputs are identical. This setup is depicted in Figure 1.


Figure 1 — Equivalence checking with conventional simulation.
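
To make the setup concrete, consider the following minimal Python sketch of Figure 1. The two one-line functions are hypothetical stand-ins for the reference and implementation simulations; a real flow would run native Verilog and Spice simulators instead.

    import random

    def reference(a, b, sel):          # hypothetical reference model
        return a if sel else b

    def implementation(a, b, sel):     # hypothetical implementation model
        return (a and sel) or (b and not sel)

    for _ in range(10000):             # random vectors: broad but never exhaustive
        vec = [random.choice([False, True]) for _ in range(3)]
        assert reference(*vec) == implementation(*vec), f"mismatch on {vec}"
    print("10,000 random vectors passed, but coverage holes may remain")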

The primary benefit of this method is that there are no constraints on either the reference or implementation models, other than that they have the same ports. Because native simulators are being used, the full expressiveness of both languages can be utilized.

This means that the reference is not limited to a synthesizable subset of Verilog or VHDL, and that the implementation can be simulated down to full Spice accuracy. The two designs can also have arbitrarily different state encodings, and can even contain pulsed or self-timed circuits.

The drawback of this approach is its lack of completeness. Because the number of potential input vectors grows exponentially with the size of a design, it is impossible to exhaustively simulate designs with any significant number of inputs. This means there may be large holes in the verification process unless designers spend significant amounts of time ensuring that the test vector suite fully exercises the design’s behavior. Simulation is often used for its simplicity and its ability to handle a wide range of design styles, but its lack of completeness has forced most designers to look for more advanced methods.

Cone-based equivalence checking brings completeness

Cone-based equivalence checking (also called static equivalence checking) has become the workhorse methodology for cell-based designers. As the name implies, cone-based checkers decompose the reference and implementation designs into logic cones: groups of logic bordered by registers, ports, or black boxes. Each cone, therefore, represents the fan-in logic of a register or primary output of the module or chip being verified. An example of this process is shown in Figure 2.

During equivalence checking, each cone is matched to its equivalent in the other model, and their logical functions are compared using formal analysis methods. If the state elements in each model are identical, the sequential behavior of the two designs will match if each of the individual cone pairs is logically equivalent.


Figure 2 — Design decomposed into logic cones.
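
The per-cone comparison can be sketched in a few lines of Python, assuming (hypothetically) that both designs have already been decomposed into cones and matched by name. Here the open-source SymPy package stands in for a production checker’s formal engine: a cone pair is equivalent exactly when the XOR of the two cone functions is unsatisfiable.

    from sympy import symbols
    from sympy.logic.boolalg import And, Or, Not, Xor
    from sympy.logic.inference import satisfiable

    a, b, en = symbols('a b en')

    ref_cones  = {'q0': And(a, en), 'q1': Or(a, b)}
    impl_cones = {'q0': Not(Or(Not(a), Not(en))),   # same functions, re-mapped gates
                  'q1': Not(And(Not(a), Not(b)))}

    for name, ref_fn in ref_cones.items():
        # Equivalent iff no input assignment makes the two cones differ.
        result = satisfiable(Xor(ref_fn, impl_cones[name]))
        print(name, 'equivalent' if result is False else f'MISMATCH at {result}')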

Cone-based equivalence checking has two primary advantages: completeness and efficiency. This methodology is extremely fast when applied to specific design styles. In general, cone-based equivalence checking provides maximum benefit to the user when the following conditions apply:

  • The reference model is constructed using synthesizable RTL.
  • The implementation model contains gate- or switch-level models.
  • The state points of the reference and implementation models are either aligned or can be aligned through automated setup methodologies.
The drawback of cone-based EC is its lack of flexibility. It is not well-suited for applications that require the verification of non-synthesizable models, transistor-level netlists, cells with any type of analog behavior, or highly regular cells (such as those found in memories). For such designs, symbolic simulation provides designers with a third option.

Symbolic simulation offers hybrid approach

Symbolic simulation can be considered a hybrid of the two previous methodologies. As in conventional simulation, equivalence is determined by simulating both designs and verifying that their outputs match over a number of cycles. By adapting formal techniques from cone-based equivalence checking, symbolic simulators perform this comparison quickly and exhaustively, even for extremely large designs.

While conventional simulation applies 0s and 1s to the inputs, symbolic simulation applies symbols, where each symbol can simultaneously represent both 0 and 1. One symbolic vector, when applied to a model or netlist with n inputs, can replace 2^n “binary” vectors, thereby providing much more comprehensive coverage. The symbolic simulator propagates the equations for the symbols towards the outputs, producing the symbolic result that is used to verify equivalence as shown in Figure 3.


Figure 3 — Equivalence checking with symbolic simulation.

The equations on the outputs from the symbolic simulator represent the results of all input combinations that could have been created by assigning a 1 or 0 to any of the input symbols. Therefore, these results are the same as would be obtained by simulating with an exhaustive set of traditional vectors, providing vastly improved coverage over conventional simulation in less time.
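
A minimal sketch of this process, again with SymPy standing in for a production symbolic simulator, is shown below. A hypothetical two-entry, one-bit memory is written on cycle 1 and read back on cycle 2; the output equation is then compared against the expected behavior, covering all 2^5 combinations of the input and initial-state symbols in a single run.

    from sympy import symbols, false
    from sympy.logic.boolalg import And, Or, Not, Xor
    from sympy.logic.inference import satisfiable

    def mux(sel, a, b):                         # 2:1 mux: a when sel, else b
        return Or(And(sel, a), And(Not(sel), b))

    d, adr, we = symbols('d adr we')            # symbolic inputs
    init0, init1 = symbols('init0 init1')       # symbolic initial memory contents

    def step(m0, m1, d, adr, we):
        m0 = mux(And(we, Not(adr)), d, m0)      # write entry 0 when adr == 0
        m1 = mux(And(we, adr), d, m1)           # write entry 1 when adr == 1
        return m0, m1, mux(adr, m1, m0)         # read-out equation

    m0, m1, _ = step(init0, init1, d, adr, we)  # cycle 1: symbolic write
    _, _, out = step(m0, m1, d, adr, false)     # cycle 2: read back (we low)

    # Expected: the written data if we was high, otherwise the old contents.
    expected = mux(we, d, mux(adr, init1, init0))
    print('equivalent' if satisfiable(Xor(out, expected)) is False else 'MISMATCH')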

For example, a 256-entry, 32-bit SRAM will have at least 40 inputs, resulting in 2^(40×4) = 2^160 (more than 10^48) possible 4-cycle vectors. Even if a conventional simulator could complete each run in less than a second, it would take a billion CPUs over 10^15 trillion years to exhaustively simulate the design, whereas a symbolic simulator can complete this task in about an hour.
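
The arithmetic behind these figures is easy to check (assuming, as above, one vector per second spread across a billion CPUs):

    vectors = 2 ** (40 * 4)                  # 2^160, about 1.46e48 four-cycle vectors
    seconds = vectors / 1e9                  # one vector per second on 1e9 CPUs
    years = seconds / (3600 * 24 * 365)
    print(f"{vectors:.2e} vectors -> {years / 1e12:.2e} trillion CPU-years")
    # prints roughly 1.46e+48 vectors -> 4.63e+19 trillion CPU-years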

Symbolic simulation is capable of comparing two designs at any abstraction level (including behavioral RTL or transistor-level Spice), even if the two designs have radically different state encodings. This technology is extremely beneficial for full-custom memory verification. For that reason it’s often used in conjunction with cone-based equivalence checking to verify SoC designs that include both synthesized logic and full-custom blocks, such as memories.

Selecting the right technology

As shown in Figure 4, the three methodologies presented can be considered to lie on a continuum, with cone-based equivalence checking at one end and conventional simulation at the other. Cone-based equivalence checking is typically the most complete and efficient approach, but there are restrictions on its use for some design styles. Conventional simulation is effectively unlimited in terms of design styles supported, but has severe coverage and completeness problems.

Symbolic simulation fills the important middle slot: the high-capacity, transistor-based verification space. In addition to its ability to verify a wide range of behavioral models and transistor netlists, symbolic simulation utilizes timing models that are effective for many types of full-custom designs, including those containing self-timed circuits or bi-stable analog components such as sense amplifiers.


Figure 4 — Continuum of equivalence-checking methods.

One limitation of both conventional and symbolic simulation approaches is that they can only verify equivalence over a specified number of cycles. Cone-based equivalence checking, on the other hand, is valid for vectors of unbounded length because it correlates individual state points. While this approach guarantees that two designs will have identical sequential behavior, it also imposes a limitation: functionality must generally match at the state-point level in the design.

Symbolic simulation, however, is the ideal approach for flow-through designs, such as memories or designs where the majority of state elements simply provide pipelining. In such cases, a design’s sequential depth can be determined and its equivalence can be proven by symbolic simulation.

A good rule of thumb is that simulation-based approaches are most suitable for designs in which the majority of state elements hold data rather than control information. This includes memories, FIFOs, register files, pipelined execution units, and so forth.

This contrast also highlights another benefit of binary and symbolic simulation: the functionality of an RTL model and a transistor netlist need only match at the block boundary. The ability to use widely divergent encoding schemes gives the design team complete freedom in its selection of implementation options.

Generally, if cone-based equivalence checking can meet the verification requirements of a design, then it should be considered the preferred methodology because it is fast and requires very little effort to run. However, if any of those constraints are not met, and the design is primarily digital in nature, then symbolic simulation-based equivalence checking would be preferred.

Finally, if the design contains substantial analog content, such as PLLs, A/Ds and D/As, then designers may be forced to rely on conventional simulation with random or hand-generated vectors for equivalence checking. Symbolic simulation techniques do, however, remain applicable to circuits containing analog functions that resolve to a digital output, which cone-based equivalence-checking tools do not support well. Examples of these types of circuits include memory sense amps, I/O level shifters, and pulse generators.

Circuit challenges for custom designers

While full-custom digital circuits (such as memories and I/O cells) pose problems for cone-based EC tools, they are particularly amenable to symbolic simulation. The following sections discuss a few specific examples where symbolic simulation is the preferred equivalence-checking methodology. While these examples are most relevant to full-custom memory designers, it is worth noting that library developers also benefit from the technology because it handles more than just standard combinational gates. Dynamic logic, for example, is easily read and verified by symbolic simulation-based tools.

Behavioral models
Memory models such as embedded SRAMs, ROMs and register files are typically written in behavioral Verilog because such models are easier to write and extremely efficient for full-chip simulation. They usually represent the intended behavior of the design rather than its actual structure or implementation, and in most cases cannot be synthesized into flip-flops and combinational logic.

Because cone-based equivalence checkers require the models to be represented as combinational logic, the behavioral models must be translated to a functionally equivalent state-based representation, such as RTL. This is a difficult, error-prone task that imposes unnecessary restrictions on logic designers and decreases productivity. Symbolic simulation directly handles behavioral code, eliminating these restrictions.
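
To illustrate what a behavioral model captures, here is a Python analogue of a 256-entry, 32-bit SRAM model; the real model would be behavioral Verilog, but the point is the same: storage is an abstract array, and nothing in the description corresponds to the flip-flops and cones that a cone-based checker requires.

    class BehavioralSRAM:
        """Abstract-array memory model; no flip-flop or cone structure."""
        def __init__(self, entries=256, width=32):
            self.mem = [0] * entries
            self.mask = (1 << width) - 1

        def write(self, addr, data):
            self.mem[addr] = data & self.mask   # store the masked word

        def read(self, addr):
            return self.mem[addr]               # combinational-style read

    ram = BehavioralSRAM()
    ram.write(7, 0xDEADBEEF)
    assert ram.read(7) == 0xDEADBEEF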

Pulsed flip-flops
Pulse signals are common in high-performance datapath designs, and most are generated using delay circuits. Because cone-based equivalence checkers do not handle circuit delay, pulse-based logic cannot be checked. Instead, the designer is required to “fake” the pulse using buffers and to stretch it to a full clock phase.

This significantly complicates the modeling effort and lowers confidence in the result of the verification. Symbolic simulation, when used in conjunction with timing-accurate models, takes circuit delay into account and can verify equivalence for pulse-based logic.
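
The effect of delay is easy to see in a sketch. In this hypothetical pulse generator, a signal is ANDed with a delayed inversion of itself: a delay-unaware boolean analysis concludes that a AND (NOT a) is constant zero, while a timing-aware simulation of the waveform reveals the pulse.

    DELAY = 2                                   # inverter-chain delay, in ticks

    def pulse_gen(waveform):
        # waveform: list of 0/1 samples of input a, one sample per tick
        out = []
        for t, a in enumerate(waveform):
            a_delayed_inv = 1 - (waveform[t - DELAY] if t >= DELAY else 0)
            out.append(a & a_delayed_inv)       # pulse = a AND delayed(NOT a)
        return out

    a = [0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
    print(pulse_gen(a))   # -> [0, 0, 1, 1, 0, 0, 0, 0, 0, 0]: a 2-tick pulse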

Self-timed circuits
Self-timed circuits are often used to improve the timing of SRAM read operations. Inverter delay chains are simple to design but difficult to scale, especially for memory compilers that must generate memory blocks ranging from kilobits to megabits. Furthermore, inverter-chain delays will not reliably track the read-circuitry timing across process variations, so replica techniques using a dummy column are often deployed in advanced SRAM designs. Cone-based logic equivalence checkers cannot model delays or handle self-timed circuits; thus, they cannot verify the equivalence of any memory design whose timing is determined by dummy columns. Timing-accurate symbolic simulation, by contrast, can model self-timed circuits.

Bidirectional transistors
Column decoder circuits typically require four pass transistors that are controlled by column select signals. The direction of signal flow through these transistors depends on whether the memory is currently being read or written. This bidirectional behavior cannot be easily modeled at the gate level, where the closest equivalent is a multiplexer.

Because cone-based equivalence checkers rely on abstracting transistors to the gate level, they cannot handle bidirectional transistors unless a fairly complicated manual model is provided. This further increases the modeling effort and reduces confidence in the result. Because symbolic simulation does not rely on logic abstraction techniques, it is able to verify logic functionality, not just circuit topology.
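
A minimal switch-level sketch (binary-valued, for brevity) shows why bidirectionality is natural at the transistor level: a pass transistor whose gate is high simply connects its two channel terminals, with no built-in direction, and node values settle by merging the connected components of conducting devices. The netlist below, a two-way column mux onto a shared data line, is hypothetical.

    def resolve(drivers, transistors, gates):
        # drivers:     node -> 0/1 for externally driven nodes
        # transistors: list of (gate, terminal_a, terminal_b) by node name
        # gates:       gate node -> True when the channel conducts
        parent = {}
        def find(n):                            # union-find over channel nets
            parent.setdefault(n, n)
            while parent[n] != n:
                n = parent[n]
            return n
        for g, a, b in transistors:
            if gates[g]:
                parent[find(a)] = find(b)       # merge the two terminals
        values = {find(n): v for n, v in drivers.items()}  # one driver per net
        return lambda n: values.get(find(n))    # undriven nets read as None

    # Either bit line may drive, or be driven by, the shared data line.
    read = resolve({'bl0': 1, 'bl1': 0},
                   [('sel0', 'bl0', 'dataline'), ('sel1', 'bl1', 'dataline')],
                   {'sel0': True, 'sel1': False})
    print(read('dataline'))                     # -> 1: bl0 drives the data line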

Sense-amp logic
Sense-amplifier (sense-amp) techniques have become increasingly common in high-performance datapath circuits and memories. Those used in memories may be designed as current or voltage sense amps.

Because cone-based equivalence checkers assume all signals are full-rail signals, they cannot verify circuits with sense amplifiers unless the circuit block containing the sense amplifier is replaced with a manually generated model. This modeling is tedious and error-prone, especially if additional logic is embedded inside the sense amplifier block. While sense-amps are essentially analog in nature, they can still be verified by symbolic simulation coupled with advanced circuit-modeling techniques.
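
Conceptually, the modeling task is to resolve a small analog differential to a full-rail digital value. The sketch below is a deliberately simplified stand-in for the advanced circuit-modeling techniques mentioned above; the threshold is hypothetical.

    def sense_amp(v_bl, v_blb, threshold=0.05):
        # Resolve a small bit-line differential to a full-rail digital value.
        dv = v_bl - v_blb
        if dv > threshold:
            return 1
        if dv < -threshold:
            return 0
        return 'X'   # differential too small to resolve reliably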

Applying symbolic simulation to custom circuits

Symbolic simulation techniques can be used throughout the design process. Used early in the flow, symbolic simulation identifies simple mistakes that often take days to find with simulation alone. As the simulation models and circuit designs mature, symbolic simulation may flag mismatches that are not real errors because they arise from input conditions that are not legal.

Once the inputs are properly constrained, these false mismatches are eliminated. Finally, symbolic simulation can be used for “check-in” tests. Using this technique, tests are scripted into design flows and run automatically to ensure that any errors introduced by late-stage changes to RTL models or Spice netlists are found promptly, before they make their way into a released set of simulation models or circuit designs.

Using symbolic simulation
Today’s symbolic simulation tools can handle a wide range of custom memories, libraries and macro cells “out of the box.” Single-port, dual-port and many multi-port RAMs run quickly with little setup. The ability to read RTL models and Spice netlists directly, without RTL restrictions or circuit limitations, enables designers to use the tools early in the design flow without waiting for test patterns to be developed. The accuracy and “circuit smarts” available today mean that designers can quickly set up and run the tools without extracting detailed circuit information or “black boxing” functions such as sense amps and pulse generators.

With a little experience, designers can get a new block of logic up and running within a few hours. Performance is high, so iterative runs complete quickly, often in seconds or minutes. This fast turnaround helps identify “dumb” errors quickly. As designs mature, run times usually peak at an hour or a few hours, which is acceptable for verifying complex memories and macros.

Because these tools provide a thorough analysis of models vs. netlists for all possible input conditions, designers need to be aware of input combinations that should not or cannot take place during normal circuit operation. This is especially true when verifying block-level functionality, because symbolic simulation is very effective at finding combinations that designers might not consider. Circuit designs are often very detailed, while simulation models are more general in nature. While this innate capability to verify functionality under all possible conditions leads to an extremely thorough verification, it can also lead to false mismatches.

This may happen, for example, when input-port combinations that can never occur in legal operation nevertheless produce differing results in the two models. In such cases, designers must constrain the appropriate inputs so that these illegal combinations are excluded from verification, as sketched below.
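
Such a constraint can be sketched as follows, assuming (hypothetically) a block in which read and write enables must never be asserted together. With the legal-input constraint conjoined, the false mismatch disappears:

    from sympy import symbols
    from sympy.logic.boolalg import And, Not, Xor
    from sympy.logic.inference import satisfiable

    rd, wr = symbols('rd wr')
    ref_out  = rd                       # stand-in output equations that differ
    impl_out = And(rd, Not(wr))         # only when rd and wr are high together

    legal = Not(And(rd, wr))            # constraint: never read and write at once
    print('unconstrained:', satisfiable(Xor(ref_out, impl_out)))
    print('constrained:  ', satisfiable(And(legal, Xor(ref_out, impl_out))))
    # unconstrained: e.g. {rd: True, wr: True} -- a false mismatch
    # constrained:   False -- no legal vector distinguishes the two models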

While care is needed in applying these new symbolic simulation-based tools, they offer benefits beyond their obvious ability to find differences between simulation models and Spice circuits. Debugging those differences, for example, illustrates how symbolic simulation tools differ from cone-based and vector-based tools.

Symbolic simulation runs very quickly because one symbol can replace many vectors. When differences are found, the simulation automatically re-runs in vector mode and dumps out binary vectors that expose the mismatch. Waveforms can then be compared using these binary vectors, enabling differences to be pinpointed quickly.
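
In SymPy terms, the same idea falls out directly: a satisfying assignment for the XOR of the two output equations is itself the binary vector that exposes the mismatch. Here a hypothetical wiring bug, an OR built as an XOR, is pinpointed by the single vector on which the two designs disagree:

    from sympy import symbols
    from sympy.logic.boolalg import Or, Xor
    from sympy.logic.inference import satisfiable

    a, b = symbols('a b')
    ref_out  = Or(a, b)
    impl_out = Xor(a, b)                        # the stand-in wiring bug

    vec = satisfiable(Xor(ref_out, impl_out))   # the exposing binary vector
    print('replay this vector against both models:', vec)
    # -> {a: True, b: True}: the reference reads 1, the buggy netlist reads 0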

The time from error detection by symbolic simulation until the creation of these binary vectors that drive a waveform display is usually a few minutes or less. When debugging using traditional vector-based simulation, designers are often faced with analyzing a huge vector set that’s been run over hundreds or thousands of cycles.

Such errors often occur long before the failure is visible on the output pins. The use of symbolic simulation has been found to provide productivity improvements of 10x to 100x in such situations because it produces the simplest possible vector that demonstrates the error, eliminating huge amounts of distracting data.

Examples of problems found
Running symbolic simulation early in the design flow has proven useful because the tools run quickly and can find problems early. This is important because full-custom digital design involves a much higher degree of human involvement than synthesis-based design, where designers typically step in only to fine-tune results later in the flow.

Full-custom memory design problems are grouped into three categories:

  • Bus mis-labeling, column decoder multiplexer input lines switched, memory cell interconnect wiring improperly connected, wires left floating, and other simple wiring or logic errors.
  • Improper write/read sequences, corner cases not properly or completely defined by simulation models, port mis-matches, and other simulation model errors or differences.
  • Bit-cell operation, sense-amp firing, self-timed loops and other circuit-level problems.
While it’s clearly important to catch mistakes prior to tape-out, tools that flag errors even sooner are of great benefit. In this case, designers are rewarded with higher productivity because mistakes found earlier in the design flow are much easier to fix. Another reason for the popularity of symbolic simulation is that it can be used before completing RTL test benches and simulation test patterns.

Limitations and areas for improvement
Some of today’s larger and more complex memories require additional setup before verification can be completed using symbolic simulation techniques. While debugging circuit problems is straightforward, there are times when mapping differences between a transistor netlist and an RTL model takes more time than expected.

As with all EDA tools available today, symbolic simulation tools will continue to benefit from increased performance and enhanced debug and diagnostic aids. Graphical user interfaces will not only encourage new users to adopt the tool earlier in the design phase but also improve the efficiency of experienced users.

Summary

Equivalence checking has been widely adopted as part of the verification flow for full-custom memories and macros by circuit designers today. Cone-based equivalence checking is extremely effective for synthesizable designs with gate-level implementations, but cannot be easily adapted to behavioral models or full-custom circuits such as memories, standard-cell and I/O libraries, and custom logic blocks.

Symbolic simulation extends cone-based equivalence checking coverage into the full-custom space. It is a complement to, and should be used in conjunction with, the other equivalence-checking methodologies. Symbolic simulation techniques are especially productive when used early in the design flow as simulation models and circuit designs are being created.

Paul Hoxey is a VLSI Design Engineer at ARM. He’s responsible for designing and verifying full-custom memories and macros for embedded core cells.

Clayton McDonald is an R&D Circuit Manager at Synopsys. He and his team develop circuit design and analysis software used in transistor-level verification tools.

David Guinther is a Product Marketing Manager at Synopsys. He works with designers, application engineers and software developers to specify and develop sign-off tools for ICs.